Today's focus and keywords (TAG): kubernetes, k8s, PV, persistent volume, PVC, persistent volume claim, Pod use PVC, Rancher, Longhorn, SC, StorageClass
Today we will test persistent storage software on a bare-metal Kubernetes environment, using Rancher's open-source Longhorn project for the verification and integrating it with the cluster we built yesterday. If your installation hits errors or failures partway through, the simplest recovery is to rebuild the cluster following yesterday's installation notes; the configuration is essentially identical. The official Longhorn installation is basically one YAML download away from a working deployment, and it ships resource types for replica requirements. Its design is built around StorageClass configuration; for the more advanced features, go read the official site (passing the buck again), and for detailed walkthroughs the Chinese-language guides are worth a look.
Check that kube-apiserver allows privileged containers (Longhorn requires this):
cat /etc/kubernetes/manifests/kube-apiserver.yaml
> - --allow-privileged=true
Install the open-iscsi initiator on every node (Longhorn uses iSCSI to attach volumes):
apt-get install open-iscsi
service iscsid start
service iscsid status
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
kubectl get pods --namespace longhorn-system
Alternatively, install through the Helm chart:
snap install helm3
kubectl create namespace longhorn-system
kubectl config view --raw > ./kubeconfig.yaml
git clone https://github.com/longhorn/longhorn
helm3 install longhorn ./longhorn/chart/ --namespace longhorn-system --kubeconfig ./kubeconfig.yaml
https://www.bookstack.cn/read/longhorn-0.8.1-en/415e7720ab3e8fa6.md
https://kubernetes.github.io/ingress-nginx/deploy/
USER=admin;
PASSWORD=admin;
echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth
kubectl -n longhorn-system create secret generic basic-auth --from-file=auth
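Before creating the secret, it can be worth sanity-checking that the generated entry really is in htpasswd/apr1 format. A minimal local check, assuming `openssl` is available (it writes to a scratch file `auth-check` so the real `auth` file is untouched):

```shell
# Regenerate the basic-auth entry into a scratch file and verify its shape.
USER=admin
PASSWORD=admin
HASH=$(openssl passwd -apr1 "${PASSWORD}")
echo "${USER}:${HASH}" > auth-check
# An apr1 entry looks like admin:$apr1$<salt>$<digest>
grep '^admin:\$apr1\$' auth-check
```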
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml
longhorn-ingress.yml
-----
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: longhorn-frontend
          servicePort: 80
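Note that the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22. On newer clusters the equivalent manifest (same name, namespace, and annotations as above; `ingressClassName: nginx` assumes the controller registers a class with that name) would be:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
```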
kubectl -n longhorn-system apply -f longhorn-ingress.yml
kubectl get svc -A | grep "nginx"
root@sdn-k8s-b2-1-2:~# kubectl get svc -A | grep "nginx"
ingress-nginx ingress-nginx-controller NodePort 10.105.219.176 <none> 80:30416/TCP,443:30412/TCP 91m
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.98.187.185 <none> 443/TCP 91m
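If you want to grab the HTTP NodePort in a script, one way is to parse it out of the listing. A minimal sketch, run here against the sample row above (on a live cluster, `kubectl ... -o jsonpath` would be the cleaner approach):

```shell
# Sample row copied from the `kubectl get svc -A | grep "nginx"` output above
line='ingress-nginx  ingress-nginx-controller  NodePort  10.105.219.176  <none>  80:30416/TCP,443:30412/TCP  91m'
# Extract the NodePort mapped to service port 80 (the number after "80:")
http_port=$(echo "$line" | sed -n 's/.*80:\([0-9]*\)\/TCP.*/\1/p')
echo "$http_port"
```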
kubectl get storageclass
longhorn-pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
kubectl apply -f longhorn-pvc.yaml
kubectl get pv,pvc
spec.volumes.persistentVolumeClaim.claimName must match the PVC deployed earlier.
pod-use-longhorn-pvc.yaml
-----
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: ubuntu
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "6000000"
    volumeMounts:
    - name: volv
      mountPath: "/data"
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc
kubectl apply -f pod-use-longhorn-pvc.yaml
kubectl get pod -o wide
kubectl exec -ti volume-test -- df -hT
lsblk
Longhorn creates a custom resource called replicas for each volume. A replica's name resembles the PV name generated for the PVC, with -r-<hash> appended. The official default replica count is 3. Since replicas are created dynamically and managed by Longhorn itself, they live under the longhorn-system namespace.
kubectl get pv,pvc
kubectl get replicas -n longhorn-system
Where to look: Longhorn Web UI > Volumes > Volume Name (click)
Here you can inspect each replica's status individually; this is also where to find out which node a replica is placed on.
kubectl describe replicas <replica-name> -n longhorn-system
kubectl get replicas -n longhorn-system -o json | jq '[.items | .[] | {nodeName: .status.ownerID, replicaStatus: .status.currentState, instanceManager: .status.instanceManagerName, replicasPath: .spec.dataPath}]'
Longhorn PVCs are limited to ReadWriteOnce, so mounting the same volume across nodes will blow up; here we will let it blow up on purpose. The official docs mention that NFS can be used to give Longhorn RWX semantics (see the official docs); those interested can go tinker with that themselves.
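For reference, newer Longhorn releases (1.1 and later) can also serve RWX volumes natively through a built-in share-manager (NFS under the hood). A hedged sketch of such a PVC, assuming a Longhorn version with RWX support; the name `longhorn-rwx-pvc` is hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-rwx-pvc        # hypothetical name for illustration
spec:
  accessModes:
    - ReadWriteMany             # requires Longhorn RWX (share-manager/NFS) support
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```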
deployment-use-longhorn-pvc.yaml
-----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
  labels:
    app: ubuntu
spec:
  replicas: 10
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu-deployment
        image: ubuntu
        imagePullPolicy: Always
        command:
        - sleep
        - "6000000"
        volumeMounts:
        - name: ubuntu-pv
          mountPath: /data
      volumes:
      - name: ubuntu-pv
        persistentVolumeClaim:
          claimName: longhorn-volv-pvc
kubectl apply -f deployment-use-longhorn-pvc.yaml
kubectl get pod -o wide
Change node-name here to your own node name.
deployment-use-longhorn-pvc.yaml
-----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
  labels:
    app: ubuntu
spec:
  replicas: 10
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu-deployment
        image: ubuntu
        imagePullPolicy: Always
        command:
        - sleep
        - "6000000"
        volumeMounts:
        - name: ubuntu-pv
          mountPath: /data
      nodeSelector:
        kubernetes.io/hostname: <node-name>
      volumes:
      - name: ubuntu-pv
        persistentVolumeClaim:
          claimName: longhorn-volv-pvc
kubectl apply -f deployment-use-longhorn-pvc.yaml
kubectl get pod -o wide
Unsurprisingly, the restriction here is the same old approach: labels solve everything. The difference is that Longhorn node tags are set through a Kubernetes annotation rather than the plain Node Label mechanism, and the tagging has a quirk (set via the CLI it is not adjusted dynamically, while the Web UI handles it fine), so let's be the first to step on that landmine. Next we modify the StorageClass; Longhorn will place replicas according to the StorageClass requirements, and if the nodes cannot satisfy those requirements the volume ends up with fewer replicas than requested.
Change parameters.numberOfReplicas to your desired replica count.
longhorn-limit-replicas-sc.yaml
-----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-set-replicas
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "2880"
  fromBackup: ""
#  diskSelector: "ssd,fast"
#  nodeSelector: "storage,fast"
parameters.nodeSelector corresponds to the node's Longhorn Node Tag.
longhorn-replicas-setplacement-sc.yaml
-----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-node-label
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  nodeSelector: "storage,fast"
Where to set it: Longhorn Web UI > Nodes > Operation
Swap "fast","storage" for your own values; these are the node tags that Longhorn matches against. They can also be set with an annotation:
kubectl annotate node <node-name> node.longhorn.io/default-node-tags='["fast","storage"]'
Basically, just set storageClassName in the PVC to match the StorageClass.
longhorn-limit-replicas-pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-limit-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-set-replicas
  resources:
    requests:
      storage: 10Gi
longhorn-replicas-setplacement-pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-node-label-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-node-label
  resources:
    requests:
      storage: 10Gi
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: ubuntu
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "6000000"
    volumeMounts:
    - name: volv
      mountPath: "/data"
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-limit-pvc
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: ubuntu
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "6000000"
    volumeMounts:
    - name: volv
      mountPath: "/data"
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-node-label-pvc